EHN accept one-vs-all encoding for labels #410
Conversation
@massich @mrastgoo @chkoar This PR adds support for one-vs-all encoded targets. It is ready for review, or at least for comments on the internal changes.
Codecov Report
@@ Coverage Diff @@
## master #410 +/- ##
==========================================
- Coverage 98.78% 98.77% -0.01%
==========================================
Files 68 68
Lines 3961 4014 +53
==========================================
+ Hits 3913 3965 +52
- Misses 48 49 +1
Continue to review full report at Codecov.
Returns
-------
X_resampled : {ndarray, sparse matrix}, shape \
(n_subset, n_samples_new, n_features)
indenting
LGTM
Check the docs for the reason for not indenting.
…On 19 March 2018 at 14:40, Joan Massich ***@***.***> wrote:
***@***.**** commented on this pull request.
------------------------------
In imblearn/ensemble/base.py
<#410 (comment)>
:
> +
+ def sample(self, X, y):
+ """Resample the dataset.
+
+ Parameters
+ ----------
+ X : {array-like, sparse matrix}, shape (n_samples, n_features)
+ Matrix containing the data which have to be sampled.
+
+ y : array-like, shape (n_samples,)
+ Corresponding label for each sample in X.
+
+ Returns
+ -------
+ X_resampled : {ndarray, sparse matrix}, shape \
+(n_subset, n_samples_new, n_features)
indenting
--
Guillaume Lemaitre
INRIA Saclay - Parietal team
Center for Data Science Paris-Saclay
https://glemaitre.github.io/
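The discussion above concerns a `sample` method that must now accept either plain class labels or a one-vs-all indicator matrix. A minimal sketch of how such input could be detected and normalized, using scikit-learn's `type_of_target` (the helper name `decode_if_one_vs_all` is hypothetical, not part of imbalanced-learn):

```python
import numpy as np
from sklearn.utils.multiclass import type_of_target


def decode_if_one_vs_all(y):
    """Hypothetical helper: if y is a one-vs-all indicator matrix,
    convert it back to 1-D class indices so existing samplers can
    operate on it; otherwise return y unchanged."""
    if type_of_target(y) == 'multilabel-indicator':
        return np.argmax(y, axis=1), True
    return np.asarray(y), False


# A one-hot (one-vs-all) encoding of the labels [0, 1, 2, 1]
y_onehot = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1],
                     [0, 1, 0]])
y_flat, encoded = decode_if_one_vs_all(y_onehot)
# y_flat is array([0, 1, 2, 1]); encoded is True
```

A sampler taking this route would resample `y_flat` and, when `encoded` is true, re-encode the resampled labels before returning them, so the caller sees the same target format it passed in.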
imblearn/__init__.py
Outdated
@@ -13,6 +13,9 @@
exceptions
    Module including custom warnings and error classes used across
    imbalanced-learn.
keras
Should not be there
This PR intends to provide some utilities for Keras:
- [x] support for one-vs-all encoded targets (#410)
- [x] balanced batch generator

TODO:
- [x] Add common test to check multiclass == multilabel-indicator (#410)
- [x] Manage the specificity of the EasyEnsemble and BalanceCascade (overwrite `sample`)
- [x] Add user guide documentation
- [x] Add an example for simple use
- [x] Add an example for deep training
- [x] Add substitution
- [x] What's new
- [x] Optional dependencies
This PR allows users to provide targets that are one-vs-all encoded.
This encoding is widely used in Keras for the loss function.
This is part of #409
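For context, a one-vs-all (one-hot) target matrix of the kind Keras losses such as categorical cross-entropy expect can be produced with scikit-learn's `label_binarize`; the class values below are illustrative:

```python
import numpy as np
from sklearn.preprocessing import label_binarize

y = np.array([0, 2, 1, 2])
# One column per class, one row per sample; exactly one 1 per row.
Y = label_binarize(y, classes=[0, 1, 2])
# Recover the original integer labels from the indicator matrix.
y_back = Y.argmax(axis=1)
```

With this PR, targets shaped like `Y` can be passed to the samplers directly instead of first converting them back to integer labels by hand.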